Fisher information metric
In information geometry, the Fisher information metric is a particular Riemannian metric which can be defined on a smooth statistical manifold, ''i.e.'', a smooth manifold whose points are probability measures defined on a common probability space. It can be used to calculate the informational difference between measurements.
The metric is interesting in several respects. First, it can be understood to be the infinitesimal form of the relative entropy (''i.e.'', the Kullback–Leibler divergence); specifically, it is the Hessian of the divergence. Alternatively, it can be understood as the metric induced by the flat-space Euclidean metric, after appropriate changes of variable. When extended to complex projective Hilbert space, it becomes the Fubini–Study metric; when written in terms of mixed states, it is the quantum Bures metric.
Considered purely as a matrix, it is known as the Fisher information matrix. When it is used as a measurement technique to estimate hidden parameters from observed random variables, it is known as the observed information.
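As a small numerical sketch of the Hessian-of-the-divergence claim (the one-parameter Bernoulli family used here is chosen only for this example and is not discussed in the text above), one can check that the Kullback–Leibler divergence between two nearby distributions behaves like one half of the Fisher information times the squared parameter displacement:
<syntaxhighlight lang="python">
# Sketch: for a Bernoulli family p(x; t), the Fisher information is
# g(t) = 1 / (t (1 - t)), and D_KL(p_t || p_{t+d}) ~ (1/2) g(t) d^2 as d -> 0.
import math

def kl_bernoulli(t, s):
    """Kullback-Leibler divergence D(Bernoulli(t) || Bernoulli(s))."""
    return t * math.log(t / s) + (1 - t) * math.log((1 - t) / (1 - s))

t = 0.3
g = 1.0 / (t * (1.0 - t))                      # exact Fisher information
for d in (1e-2, 1e-3, 1e-4):
    ratio = kl_bernoulli(t, t + d) / (0.5 * g * d**2)
    print(f"d = {d:g}: KL / ((1/2) g d^2) = {ratio:.6f}")
# The printed ratios tend to 1 as d shrinks, consistent with the claim above.
</syntaxhighlight>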
==Definition==
Given a statistical manifold with coordinates \theta=(\theta_1, \theta_2, \ldots, \theta_n), one writes p(x,\theta) for the probability distribution as a function of \theta. Here x is drawn from the value space ''R'' for a (discrete or continuous) random variable ''X''. The probability is normalized by
:\int_R p(x,\theta) \, dx = 1.
The Fisher information metric then takes the form:
:g_{jk}(\theta) = \int_R \frac{\partial \log p(x,\theta)}{\partial\theta_j} \frac{\partial \log p(x,\theta)}{\partial\theta_k} p(x,\theta) \, dx.
The integral is performed over all values ''x'' in ''R''. The variable \theta is now a coordinate on a Riemannian manifold, and the labels ''j'' and ''k'' index its local coordinate axes.
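The following sketch evaluates this integral numerically for a univariate normal family with coordinates \theta = (\mu, \sigma) (a family chosen here only for illustration) and compares the result with the standard closed form \mathrm{diag}(1/\sigma^2,\, 2/\sigma^2):
<syntaxhighlight lang="python">
# Sketch: numerical evaluation of g_jk = int (d log p/d theta_j)(d log p/d theta_k) p dx
# for the normal family p(x; mu, sigma); the expected result is diag(1/sigma^2, 2/sigma^2).
import numpy as np

def log_p(x, mu, sigma):
    return -0.5 * np.log(2 * np.pi * sigma**2) - (x - mu) ** 2 / (2 * sigma**2)

def fisher_metric(mu, sigma, eps=1e-5):
    x = np.linspace(mu - 10 * sigma, mu + 10 * sigma, 20001)
    dx = x[1] - x[0]
    p = np.exp(log_p(x, mu, sigma))
    # score components d log p / d theta_j, via central differences in the parameters
    d_mu = (log_p(x, mu + eps, sigma) - log_p(x, mu - eps, sigma)) / (2 * eps)
    d_sigma = (log_p(x, mu, sigma + eps) - log_p(x, mu, sigma - eps)) / (2 * eps)
    scores = np.stack([d_mu, d_sigma])                      # shape (2, len(x))
    # Riemann-sum approximation of the defining integral
    return (scores[:, None, :] * scores[None, :, :] * p).sum(axis=-1) * dx

print(np.round(fisher_metric(mu=0.0, sigma=2.0), 6))
# Approximately [[0.25, 0.], [0., 0.5]], i.e. diag(1/sigma^2, 2/sigma^2) for sigma = 2.
</syntaxhighlight>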
When the probability is derived from the Gibbs measure, as it would be for any Markovian process, \theta can also be understood as a Lagrange multiplier; Lagrange multipliers are used to enforce constraints, such as holding the expectation value of some quantity constant. If there are ''n'' constraints holding ''n'' different expectation values constant, then the dimension of the manifold is ''n'' dimensions smaller than the original space. In this case, the metric can be explicitly derived from the partition function; a derivation and discussion are presented in the article on the partition function.
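As a brief sketch of that derivation (the notation H_j(x) for the constrained quantities is introduced here for illustration and does not appear in the text above): write the Gibbs distribution as p(x,\theta) = e^{-\sum_j \theta_j H_j(x)} / Z(\theta), with partition function Z(\theta) = \int_R e^{-\sum_j \theta_j H_j(x)} \, dx. Normalization forces \partial \log Z / \partial\theta_j = -\mathrm{E}(H_j), so the score is
:\frac{\partial \log p(x,\theta)}{\partial\theta_j} = -H_j(x) - \frac{\partial \log Z(\theta)}{\partial\theta_j} = -\left(H_j(x) - \mathrm{E}(H_j)\right).
Inserting this into the definition above gives
:g_{jk}(\theta) = \mathrm{E}\left(\left(H_j - \mathrm{E}(H_j)\right)\left(H_k - \mathrm{E}(H_k)\right)\right) = \frac{\partial^2 \log Z(\theta)}{\partial\theta_j \, \partial\theta_k},
''i.e.'', the metric is the Hessian of the logarithm of the partition function, which is the covariance matrix of the constrained quantities.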
Substituting i(x,\theta) = -\log p(x,\theta) from information theory, an equivalent form of the definition above is:
:g_{jk}(\theta) = \int_R \frac{\partial^2 i(x,\theta)}{\partial\theta_j \, \partial\theta_k} p(x,\theta) \, dx = \mathrm{E}\left(\frac{\partial^2 i(x,\theta)}{\partial\theta_j \, \partial\theta_k}\right).
To show that this equivalent form equals the definition above, note that the normalization condition implies
:\mathrm{E}\left(\frac{\partial \log p(x,\theta)}{\partial\theta_j}\right) = \int_R \frac{\partial \log p(x,\theta)}{\partial\theta_j} p(x,\theta) \, dx = 0,
and apply \partial/\partial\theta_k to both sides. Differentiating under the integral sign gives
:0 = \int_R \frac{\partial^2 \log p(x,\theta)}{\partial\theta_j \, \partial\theta_k} p(x,\theta) \, dx + \int_R \frac{\partial \log p(x,\theta)}{\partial\theta_j} \frac{\partial \log p(x,\theta)}{\partial\theta_k} p(x,\theta) \, dx,
so the second integral, which is g_{jk}(\theta), equals the negative of the first; since \partial^2 i / \partial\theta_j \, \partial\theta_k = -\partial^2 \log p / \partial\theta_j \, \partial\theta_k, this is exactly the equivalent form.
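A short Monte Carlo check of this equivalence, using a one-parameter exponential family p(x;\lambda) = \lambda e^{-\lambda x} chosen purely for illustration (the exact value of the metric in this case is 1/\lambda^2):
<syntaxhighlight lang="python">
# Sketch: both the score form and the Hessian form of the metric are estimated
# from samples of p(x; lam) = lam * exp(-lam * x); the exact answer is 1 / lam**2.
import numpy as np

rng = np.random.default_rng(0)
lam, eps = 1.5, 1e-3
x = rng.exponential(scale=1.0 / lam, size=1_000_000)    # samples of X ~ p(x; lam)

def log_p(x, lam):
    return np.log(lam) - lam * x

# d log p / d lam by a central difference, and d^2 i / d lam^2 (i = -log p)
# by a central second difference
score = (log_p(x, lam + eps) - log_p(x, lam - eps)) / (2 * eps)
hess_i = -(log_p(x, lam + eps) - 2 * log_p(x, lam) + log_p(x, lam - eps)) / eps**2

print("score form   E[(d log p / d lam)^2]:", np.mean(score**2))
print("Hessian form E[d^2 i / d lam^2]    :", np.mean(hess_i))
print("exact 1 / lam^2                    :", 1.0 / lam**2)
# Both estimates come out close to 0.444..., matching the exact value.
</syntaxhighlight>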

Excerpt source: Wikipedia, the free encyclopedia; see the Wikipedia article "Fisher information metric" for the full text.


